P-MARL: Prediction-Based Multi-Agent Reinforcement Learning for Non-Stationary Environments

Authors

  • Andrei Marinescu
  • Ivana Dusparic
  • Adam Taylor
  • Vinny Cahill
  • Siobhán Clarke
Abstract

Multi-Agent Reinforcement Learning (MARL) is a widely-used technique for optimization in decentralised control problems, addressing complex challenges when several agents change actions simultaneously and without collaboration. Such challenges are exacerbated when the environment in which the agents learn is inherently non-stationary, as agents’ actions are then non-deterministic. In this paper, we show that advance knowledge of environment behaviour through prediction significantly improves agents’ performance in converging to near-optimal control solutions. We propose P-MARL, a MARL approach which employs a prediction mechanism to obtain such advance knowledge, which is then used to improve agents’ learning. The underlying non-stationary behaviour of the environment is modelled as a time-series and prediction is based on historic data and key environment variables. This provides information regarding potential upcoming changes in the environment, which is a key influencer in agents’ decision-making. We evaluate P-MARL in a smart grid scenario and show that a 92% Pareto efficient solution can be achieved in an electric vehicle charging problem, where energy demand across a community of households is inherently non-stationary. Finally, we analyse the effects of environment prediction accuracy on the performance of our approach.
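As a rough illustration of the idea (not the paper's actual algorithm), the sketch below augments a tabular Q-learner's state with a forecast of the next environment value, so the learned policy can anticipate non-stationary demand rather than only react to it. The persistence predictor, the discretisation, and all class and function names here are placeholder assumptions; P-MARL itself fits richer time-series models to historic data and key environment variables.

```python
import random
from collections import defaultdict

def persistence_forecast(history):
    """Naive time-series predictor: repeat the last observed value.
    A stand-in for the richer prediction mechanism used by P-MARL."""
    return history[-1] if history else 0.0

class PredictiveQAgent:
    """Tabular Q-learner whose state includes an environment forecast."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, r, s_next):
        # Standard Q-learning update.
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

def make_state(demand, history):
    """Couple the current demand level with its forecast, so the policy
    can distinguish 'demand high and rising' from 'demand high and falling'."""
    return (round(demand), round(persistence_forecast(history)))
```

Because the forecast is part of the state, upcoming environment changes influence action values before the change actually occurs, which is the mechanism the abstract credits for faster convergence.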


Related articles

Multi-Agent Learning with Policy Prediction

Due to the non-stationary environment, learning in multi-agent systems is a challenging problem. This paper first introduces a new gradient-based learning algorithm, augmenting the basic gradient ascent approach with policy prediction. We prove that this augmentation results in a stronger notion of convergence than the basic gradient ascent, that is, strategies converge to a Nash equilibrium wi...
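The policy-prediction idea can be sketched for a 2x2 matrix game: each player ascends the gradient of its expected payoff, evaluated at a one-step prediction of the opponent's strategy (the opponent's current strategy plus its own gradient step). The game, step sizes, and clipping below are illustrative assumptions, not the paper's exact formulation.

```python
def grad(payoff, p_self, p_opp):
    """d/dp_self of expected payoff in a 2x2 game.
    payoff[i][j]: reward when self plays i and opponent plays j;
    index 0 is the action played with probability p."""
    (a, b), (c, d) = payoff
    return p_opp * (a - b - c + d) + (b - d)

def clip(p):
    return min(1.0, max(0.0, p))

def iga_pp_step(x, y, R1, R2, eta=0.01, gamma=0.01):
    # Predict each opponent's next policy via its own gradient direction...
    y_pred = clip(y + gamma * grad(R2, y, x))
    x_pred = clip(x + gamma * grad(R1, x, y))
    # ...then ascend against the predicted, not current, opponent policy.
    x_new = clip(x + eta * grad(R1, x, y_pred))
    y_new = clip(y + eta * grad(R2, y, x_pred))
    return x_new, y_new

# Matching pennies: player 1 wants to match, player 2 to mismatch.
# Plain gradient ascent cycles here; the prediction term damps the
# cycle toward the mixed Nash equilibrium (0.5, 0.5).
R1 = ((1, -1), (-1, 1))
R2 = ((-1, 1), (1, -1))
x, y = 0.2, 0.8
for _ in range(5000):
    x, y = iga_pp_step(x, y, R1, R2)
```

The prediction term acts like a damping force on the otherwise circular gradient dynamics, which is the intuition behind the stronger convergence guarantee the blurb mentions.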


Decentralised Multi-Agent Reinforcement Learning for Dynamic and Uncertain Environments

Multi-Agent Reinforcement Learning (MARL) is a widely used technique for optimization in decentralised control problems. However, most applications of MARL are in static environments, and are not suitable when agent behaviour and environment conditions are dynamic and uncertain. Addressing uncertainty in such environments remains a challenging problem for MARL-based systems. The dynamic nature ...


Three Perspectives on Multi-Agent Reinforcement Learning

This chapter reviews three perspectives on multi-agent reinforcement learning (MARL): (1) cooperative MARL, which concerns mutual interaction between cooperative agents; (2) equilibrium-based MARL, which focuses on equilibrium solutions among gaming agents; and (3) best-response MARL, which suggests a no-regret policy against other competitive agents. Then the authors present a general framew...


A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning

To achieve general intelligence, agents must learn how to interact with others in a shared environment: this is the challenge of multiagent reinforcement learning (MARL). The simplest form is independent reinforcement learning (InRL), where each agent treats its experience as part of its (non-stationary) environment. In this paper, we first observe that policies learned using InRL can overfit t...


Plan-based reward shaping for multi-agent reinforcement learning

Recent theoretical results have justified the use of potential-based reward shaping as a way to improve the performance of multi-agent reinforcement learning (MARL). However, the question remains of how to generate a useful potential function. Previous research demonstrated the use of STRIPS operator knowledge to automatically generate a potential function for single-agent reinforcement learnin...
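Potential-based reward shaping, which the blurb above builds on, adds F(s, s') = gamma * phi(s') - phi(s) to the environment reward; this form is known to leave the optimal policy unchanged. The sketch below uses a hand-written distance-to-goal potential as an illustrative stand-in for the plan-derived potentials discussed above; the goal position and constants are assumptions.

```python
GOAL = 10    # hypothetical goal state on a 1-D line
GAMMA = 0.9  # discount factor shared with the learner

def phi(state):
    # Potential function: higher (less negative) closer to the goal.
    return -abs(GOAL - state)

def shaped_reward(env_reward, s, s_next):
    """Potential-based shaping: F(s, s') = GAMMA * phi(s') - phi(s)."""
    return env_reward + GAMMA * phi(s_next) - phi(s)
```

A step toward the goal (e.g. 5 -> 6) yields a positive shaping bonus, while a step away (5 -> 4) is penalised, steering exploration without altering which policy is optimal.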



Publication date: 2015